
    Physical Layer Cooperation: Theory and Practice

    Information theory has long pointed to the promise of physical-layer cooperation for boosting the spectral efficiency of wireless networks. Yet the optimal relaying strategy for achieving network capacity has to date remained elusive. Recently, however, a relaying strategy termed Quantize-Map-and-Forward (QMF) was proved to achieve the capacity of arbitrary wireless networks within a bounded additive gap. This thesis contributes to the design, analysis and implementation of QMF relaying by optimizing its performance for small relay networks, proposing low-complexity iteratively decodable codes, and carrying out over-the-air experiments on software-radio testbeds to assess its real-world potential and competitiveness. The original QMF scheme has each relay perform the same operation, agnostic to the network topology and to channel state information (CSI); this facilitates the analysis for arbitrary networks, yet incurs a performance penalty for small networks and medium-SNR regimes. In this thesis, we demonstrate the gains QMF can realize when its operation is optimized using topological and channel state information. We show that for the N-relay diamond network, taking topological information into account exponentially reduces the QMF additive approximation gap from Θ(N) bits/s/Hz to Θ(log N) bits/s/Hz, while for the one-relay and two-relay networks, topological information and CSI together can yield gains of as much as 6 dB. Moreover, we explore the benefits of jointly optimizing QMF with half-duplex scheduling, as well as of hybrid schemes that combine QMF and Decode-and-Forward (DF) relay operations. To take QMF from a purely information-theoretic idea to an implementable strategy, we derive a structure employing Low-Density Parity-Check (LDPC) ensembles for the relay-node operations and message-passing algorithms for decoding. We demonstrate, through extensive simulations over the full-duplex diamond network, that our designs offer robust performance over fading channels and achieve the full diversity order of the network at moderate SNRs. Next, we explore the potential real-world impact of QMF and present the design and experimental evaluation of a wireless system that exploits relaying in the context of WiFi. We deploy the three main competing relaying strategies, Amplify-and-Forward (AF), DF and QMF, on the WarpLab software-radio platform, and present what are, to the best of our knowledge, the first experimental results comparing QMF, AF and DF in a realistic indoor setting. We find that QMF is competitive with the other two schemes, offering in some cases up to 12% throughput gains and up to 60% improvement in frame error rate over the next best scheme. We then present a more advanced architecture for physical-layer cooperation, termed QUILT, that seamlessly adapts to the underlying network configuration to achieve performance competitive with or better than the best current approaches. It combines on-demand, opportunistic use of DF or QMF, followed by interleaving at the relay, with hybrid decoding at the destination that extracts information even from potentially undecodable received frames. We theoretically quantify how our design choices affect system performance. We also deploy QUILT on WarpLab and show through over-the-air experiments up to a 5x improvement in FER over the next best cooperative protocol.
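
    The additive-gap guarantee at the heart of these results can be summarized in one schematic statement (a sketch of the standard formulation; the precise constants are those derived in the thesis and are not reproduced here):

```latex
% Schematic additive-gap guarantee: QMF achieves a rate within a constant
% gap \kappa of the cut-set upper bound \overline{C}, where \kappa depends
% on the network size but not on the SNR or channel gains.
\[
  \overline{C} - \kappa \;\le\; R_{\mathrm{QMF}} \;\le\; \overline{C},
  \qquad
  \kappa =
  \begin{cases}
    \Theta(N) & \text{topology-agnostic QMF,}\\
    \Theta(\log N) & \text{topology-aware QMF on the $N$-relay diamond.}
  \end{cases}
\]
```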

    Manifold-Preserving Transformers are Effective for Short-Long Range Encoding

    Multi-head self-attention-based Transformers have shown promise in a variety of learning tasks. Although these models exhibit significant improvement in understanding short-term and long-term contexts from sequences, the encoders of Transformers and their variants fail to preserve layer-wise contextual information. Transformers usually project tokens onto sparse manifolds and fail to preserve mathematical equivalence among the token representations. In this work, we propose TransJect, an encoder model that guarantees a theoretical bound on layer-wise distance preservation between any pair of tokens. We propose a simple alternative to dot-product attention that ensures Lipschitz continuity. This allows TransJect to learn injective mappings that transform token representations to different manifolds with similar topology, preserving the Euclidean distance between every pair of tokens in subsequent layers. Evaluations across multiple benchmark short- and long-sequence classification tasks show maximum improvements of 6.8% and 5.9%, respectively, over Transformer variants. Additionally, TransJect achieves 79% better performance than the Transformer on language modeling. We further highlight the shortcomings of multi-head self-attention from a statistical-physics viewpoint. Although multi-head self-attention was conceived to learn different levels of abstraction within a network, our empirical analyses suggest that different attention heads learn in a random, unordered fashion. In contrast, TransJect adopts a mixture of experts for regularization; these experts are more orderly and balanced and learn different sparse representations from the input sequences. TransJect exhibits very low entropy and can be efficiently scaled to larger depths. Comment: 17 pages, 7 figures, 5 tables; Findings of the Association for Computational Linguistics: EMNLP 2023.
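
    To make the distance-preservation claim concrete, the following toy sketch (our illustration, not the authors' code; all names and shapes are assumptions) shows why a layer built from an orthogonal map is 1-Lipschitz and preserves pairwise Euclidean distances between token representations exactly:

```python
# Illustrative sketch (not the authors' code): a layer built from an
# orthogonal matrix is an isometry, so pairwise Euclidean distances
# between token representations are preserved exactly -- the property
# TransJect bounds layer-wise.
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(d: int) -> np.ndarray:
    """Sample a d x d orthogonal matrix via QR decomposition."""
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

d, n_tokens = 64, 10
tokens = rng.normal(size=(n_tokens, d))   # toy token representations
layer = random_orthogonal(d)
out = tokens @ layer                      # distance-preserving "layer"

def pairwise_dists(x: np.ndarray) -> np.ndarray:
    return np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

# Distances before and after the map agree up to numerical precision,
# i.e. the map is 1-Lipschitz (in fact an isometry).
assert np.allclose(pairwise_dists(tokens), pairwise_dists(out))
print("max distortion:",
      np.abs(pairwise_dists(tokens) - pairwise_dists(out)).max())
```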

    Persona-aware Generative Model for Code-mixed Language

    Code-mixing and script-mixing are prevalent across online social networks and multilingual societies. However, a user's preference for code-mixing depends on the user's socioeconomic status and demographics and on the local context, which existing generative models mostly ignore when generating code-mixed text. In this work, we make a pioneering attempt to develop a persona-aware generative model that generates text resembling the real-life code-mixed text of individuals. We propose PARADOX, a Persona-aware Generative Model for Code-mixed Generation: a novel Transformer-based encoder-decoder model that encodes an utterance conditioned on a user's persona and generates code-mixed text without monolingual reference data. We propose an alignment module that recalibrates the generated sequence to resemble real-life code-mixed text. PARADOX generates code-mixed text that is semantically more meaningful and linguistically more valid. To evaluate the personification capabilities of PARADOX, we propose four new metrics: CM BLEU, CM Rouge-1, CM Rouge-L and CM KS. On average, PARADOX achieves 1.6 points better CM BLEU, 47% better perplexity and 32% better semantic coherence than its non-persona-based counterparts. Comment: 4 tables, 4 figures.
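
    The general recipe for conditioning an encoder-decoder on a persona can be sketched as follows (a minimal illustration under our own assumptions: a learned persona embedding prepended to the token embeddings; this is not PARADOX's published architecture):

```python
# Minimal sketch of persona conditioning (an assumption about the general
# recipe, not PARADOX's actual architecture): embed persona attributes and
# prepend the persona vector to the token embeddings fed to the encoder.
import torch
import torch.nn as nn

class PersonaConditionedEncoder(nn.Module):
    def __init__(self, vocab_size=1000, n_personas=50, d_model=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.persona_emb = nn.Embedding(n_personas, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, tokens, persona_id):
        # tokens: (batch, seq_len) token ids; persona_id: (batch,) user ids
        x = self.tok_emb(tokens)                       # (B, T, D)
        p = self.persona_emb(persona_id).unsqueeze(1)  # (B, 1, D)
        x = torch.cat([p, x], dim=1)                   # prepend persona token
        return self.encoder(x)

enc = PersonaConditionedEncoder()
out = enc(torch.randint(0, 1000, (2, 16)), torch.tensor([3, 7]))
print(out.shape)  # torch.Size([2, 17, 128])
```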

    Inferring to C or not to C: Evolutionary games with Bayesian inferential strategies

    Strategies for sustaining cooperation and preventing exploitation by selfish agents in repeated games have mostly been restricted to Markovian strategies, in which an agent's response depends on the actions in the previous round. Such strategies are characterized by a lack of learning. However, learning from evidence accumulated over time and using it to dynamically update one's response is a key feature of living organisms. Bayesian inference provides a framework for such evidence-based learning mechanisms. It is therefore imperative to understand how strategies based on Bayesian learning fare in repeated games against Markovian strategies. Here, we consider a scenario where a Bayesian player uses the accumulated evidence of the opponent's actions over several rounds to continuously update her belief about the reactive opponent's strategy. The Bayesian player can then act on her inferred belief in different ways. By studying repeated Prisoner's Dilemma games with such Bayesian inferential strategies, in both infinite and finite populations, we identify the conditions under which such strategies can be evolutionarily stable. We find that a Bayesian strategy that is less altruistic than the inferred belief about the opponent's strategy can outperform a larger set of reactive strategies, whereas one that is more generous than the inferred belief is more successful when the benefit-to-cost ratio of mutual cooperation is high. Our analysis reveals how learning the opponent's strategy through Bayesian inference, as opposed to utility maximization, can be beneficial in the long run by preventing exploitation and eventual invasion by reactive strategies. Comment: 13 pages, 9 figures.
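
    A minimal sketch of the inference loop described above (our illustration, with an assumed Beta(1,1) prior and an assumed "generosity" knob; the paper's exact model may differ): a reactive opponent is summarized by its probabilities of cooperating after each of our moves, the Bayesian player tracks Beta posteriors over those probabilities, and then cooperates slightly less often than her inferred belief:

```python
# Illustrative sketch (assumptions, not the paper's exact model): infer a
# reactive opponent's strategy -- its probabilities of cooperating after
# seeing C or D -- with Beta posteriors, then respond slightly less
# cooperatively than the inferred belief ("less altruistic" play).
import random

class BayesianPlayer:
    def __init__(self, generosity=-0.1):
        # Beta(1, 1) priors: counts[m] = [cooperations, defections] observed
        # from the opponent after we played move m.
        self.counts = {"C": [1, 1], "D": [1, 1]}
        self.generosity = generosity  # negative => less altruistic than belief
        self.last_move = "C"

    def update(self, opponent_move):
        idx = 0 if opponent_move == "C" else 1
        self.counts[self.last_move][idx] += 1

    def act(self):
        # Posterior mean of P(opponent cooperates) given our previous move.
        coop, defect = self.counts[self.last_move]
        belief = coop / (coop + defect)
        prob_c = min(max(belief + self.generosity, 0.0), 1.0)
        self.last_move = "C" if random.random() < prob_c else "D"
        return self.last_move

# Toy run against a tit-for-tat-like reactive opponent (p=0.9, q=0.1).
random.seed(1)
player = BayesianPlayer()
for _ in range(200):
    my_move = player.act()
    p = 0.9 if my_move == "C" else 0.1   # opponent reacts to our move
    opp_move = "C" if random.random() < p else "D"
    player.update(opp_move)
print("posterior means:",
      {k: v[0] / sum(v) for k, v in player.counts.items()})
```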

    Efficient Subnetwork Selection in Relay Networks

    We consider a source that would like to communicate with a destination over a layered Gaussian relay network. We present a computationally efficient method for selecting a near-optimal (in terms of throughput) subnetwork of a given size connecting the source to the destination. Our method starts by formulating an integer optimization problem that maximizes the rates the Quantize-Map-and-Forward relaying protocol can achieve over a selected subnetwork; we then relax the integer constraints to obtain a non-linear optimization over the reals. For diamond networks, we prove that this relaxed optimization is concave, while for general layered networks we provide empirical evidence of near-concavity, paving the way for efficient algorithms to solve the relaxed problem. We then round the relaxed solution to select a specific subnetwork. Simulations using off-the-shelf non-linear optimization algorithms demonstrate excellent performance with respect to the true integer optimum for both diamond and multi-layered networks. Even with these non-customized algorithms, significant time savings are observed relative to exhaustive integer optimization.
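
    The relax-and-round recipe can be sketched in a few lines (a toy stand-in: the concave objective below is an assumption chosen for illustration, not the actual QMF rate expression optimized in the paper):

```python
# Sketch of the relax-and-round recipe described above, on a toy concave
# surrogate objective, solved with an off-the-shelf non-linear optimizer.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, k = 8, 3                        # n candidate relays, select k of them
gains = rng.uniform(0.5, 4.0, n)   # toy per-relay "channel gains"

def neg_rate(x):
    # Concave stand-in for the throughput of the fractional selection x.
    return -np.sum(np.log2(1.0 + gains * x))

# Relaxation: binary selection variables become reals in [0, 1] summing to k.
res = minimize(
    neg_rate,
    x0=np.full(n, k / n),
    bounds=[(0.0, 1.0)] * n,
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - k}],
)

# Rounding: keep the k relays with the largest fractional values.
selected = np.argsort(res.x)[-k:]
print("relaxed solution:", np.round(res.x, 3))
print("selected subnetwork (relay indices):", sorted(selected.tolist()))
```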

    Switched Local Schedules for Diamond Networks

    We consider a Gaussian diamond network in which a source communicates with the destination through n non-interfering half-duplex relays. We focus on half-duplex schedules that utilize only local channel state information, i.e., each relay has access only to its incoming and outgoing channel realizations. We demonstrate that random independent switching, resulting in multiple listen-transmit sub-cycles at each relay while still respecting the overall locally optimal listen-transmit fractions, makes it possible to approximately achieve at least 3/4 of the capacity of the 2-relay diamond network. With a single listen-transmit cycle, this fraction drops from 3/4 to 1/2. We also provide simulation results indicating that the same fractions of capacity are retained in networks with more than 2 relays.
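
    The following toy computation (our illustration; the listen fractions are assumed values, not derived from channel gains) shows the key degree of freedom that random independent switching buys: all four joint listen/transmit states occur with product frequencies, whereas a single aligned cycle realizes only a subset:

```python
# Toy illustration (an assumption-level sketch, not the paper's analysis):
# with random independent switching, a 2-relay diamond visits all four joint
# listen/transmit states with product frequencies, whereas a single aligned
# listen-transmit cycle visits only some of them -- the extra states are
# what the switched schedule exploits.
import numpy as np

rng = np.random.default_rng(3)
T = 100_000
p1, p2 = 0.6, 0.4   # locally optimal listen fractions (illustrative values)

def state_fractions(a, b):
    return {
        "(L,L)": np.mean(a & b),
        "(L,T)": np.mean(a & ~b),
        "(T,L)": np.mean(~a & b),
        "(T,T)": np.mean(~a & ~b),
    }

# Random independent switching: each relay listens i.i.d. in every slot.
s1 = rng.random(T) < p1
s2 = rng.random(T) < p2
print("i.i.d. switching:",
      {k: round(v, 3) for k, v in state_fractions(s1, s2).items()})
# Approaches the products p1*p2, p1*(1-p2), (1-p1)*p2, (1-p1)*(1-p2).

# Single cycle: each relay listens for one contiguous block at the start.
t = np.arange(T)
c1, c2 = t < p1 * T, t < p2 * T
print("single cycle:   ",
      {k: round(v, 3) for k, v in state_fractions(c1, c2).items()})
# Only aligned states occur; e.g. (T,L) never happens when p1 > p2.
```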